Improving TCP latency with super-packets
Author
Abstract
The vast majority of today’s Internet traffic uses TCP connections whose lengths range from a few packets to many orders of magnitude more. Even though short flows are prevalent in web traffic and their completion time can directly impact user experience, these flows are generally not prioritized and can experience drops at routers because of the load caused by longer TCP flows. In this paper, we describe a novel way to improve the latency of these flows. One possible solution to lower latency is to increase the Maximum Segment Size (MSS), which would allow transmitting the same amount of information in fewer packets. However, this impairs efficient packet multiplexing by routers. Another approach to reduce latency is to increase TCP’s initial window size, which also allows sending more data at once. However, the benefits of this for short flows can be wiped out if the flow experiences even a single packet drop, since this causes retransmissions and, in some cases, timeouts. In this paper, we propose the concept of super-packets to deal with this “all-or-nothing” property of flows. Super-packets are collections of packets whose collective performance is critical for applications. End-hosts explicitly request reservation of buffer space for super-packets in routers, which in turn look at their buffer utilization and guarantee forwarding of all packets in a given super-packet. This paper proposes a simple and efficient algorithm for such reservations in routers. We use simulations to demonstrate how the use of super-packets has the same effect as increasing the MSS without the downsides. Extensive evaluation of an implementation on the Click Modular Router shows that super-packets can improve the average latency of flows and also effectively solve the problem of incast. Additionally, our results demonstrate that super-packets can prioritize short flows without the need for explicit priorities.
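To make the reservation step concrete, the minimal sketch below shows one way a router could admit or reject a super-packet reservation against its current buffer occupancy. The class and parameter names (BufferReservations, reserve_limit) and the simple threshold rule are illustrative assumptions, not the actual algorithm or Click elements described in the paper.

```python
# Hypothetical sketch of a router-side admission check for super-packet
# reservation requests. Names and the threshold policy are assumptions
# made for illustration; the paper's Click implementation may differ.

class BufferReservations:
    def __init__(self, buffer_capacity_pkts, reserve_limit=0.5):
        self.capacity = buffer_capacity_pkts   # total buffer slots (packets)
        self.reserve_limit = reserve_limit     # max fraction of buffer reservable
        self.reserved = {}                     # super-packet id -> slots still held

    def request(self, sp_id, num_pkts, queue_occupancy):
        """End-host asks to guarantee forwarding of `num_pkts` packets
        that form one super-packet. Returns True if admitted."""
        total_reserved = sum(self.reserved.values())
        free = self.capacity - queue_occupancy - total_reserved
        # Admit only if the whole super-packet fits and the reservation
        # budget is not exceeded; otherwise reject so the sender can fall
        # back to ordinary, unreserved transmission.
        if num_pkts <= free and total_reserved + num_pkts <= self.reserve_limit * self.capacity:
            self.reserved[sp_id] = num_pkts
            return True
        return False

    def on_forward(self, sp_id):
        """Release one reserved slot when a packet of the super-packet departs."""
        if sp_id in self.reserved:
            self.reserved[sp_id] -= 1
            if self.reserved[sp_id] == 0:
                del self.reserved[sp_id]
```

In such a scheme, a packet carrying an admitted super-packet identifier would be forwarded rather than dropped as long as its reservation is held, which is what would give the super-packet its all-or-nothing guarantee.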
Similar resources
Benefits and Practicality of Super-Packets
TCP’s slow-start was designed to prevent hosts from sending more data into the network than the network is capable of carrying. A key component of slow-start is the initial congestion window, or init_cwnd. Today, it is set to at most four segments, or roughly 4KB of data. As a result, every short-lived TCP connection (henceforth “short flow”) suffers a performance hit when it comes to throughput (and ...
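As a rough back-of-the-envelope illustration of that throughput hit, the sketch below counts slow-start rounds for a short response, assuming the window doubles every RTT, no losses, and a 1460-byte MSS; the helper name and the numbers are illustrative assumptions, not taken from the cited paper.

```python
import math

def slowstart_rounds(flow_bytes, init_cwnd_segments=4, mss=1460):
    """Rough count of RTTs needed to deliver `flow_bytes` during slow-start,
    assuming the window doubles every RTT and no losses (illustrative only)."""
    segments = math.ceil(flow_bytes / mss)
    sent, cwnd, rounds = 0, init_cwnd_segments, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds

# Example: a 30 KB response is ceil(30000/1460) = 21 segments.
# With init_cwnd = 4 it takes 4 + 8 + 16 segments, i.e. 3 RTTs,
# but only 1 RTT if the initial window already covers 21 segments.
print(slowstart_rounds(30_000))                          # -> 3
print(slowstart_rounds(30_000, init_cwnd_segments=21))   # -> 1
```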
FavorQueue: A parameterless active queue management to improve TCP traffic performance
This paper presents and analyses the implementation of a novel active queue management (AQM) scheme named FavorQueue that aims to improve the transfer delay of short-lived TCP flows over best-effort networks. The idea is to dequeue first those packets that do not belong to a flow already enqueued. The rationale is to mitigate the delay induced by long-lived TCP flows on the pace of short TCP data requests ...
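A minimal sketch of that dequeue-favoring idea, under the simplifying assumption of two internal FIFOs and per-flow occupancy counters (not the exact FavorQueue AQM), could look like this:

```python
from collections import deque

# Packets from flows that have no other packet currently queued are "favored"
# and dequeued ahead of packets from already-active flows. The two-queue
# structure and flow bookkeeping are simplifying assumptions for illustration.

class FavorQueueSketch:
    def __init__(self):
        self.favored = deque()     # packets from flows new to the queue
        self.normal = deque()      # packets from flows already present
        self.active_flows = {}     # flow id -> packets currently queued

    def enqueue(self, pkt, flow_id):
        if self.active_flows.get(flow_id, 0) == 0:
            self.favored.append((pkt, flow_id))
        else:
            self.normal.append((pkt, flow_id))
        self.active_flows[flow_id] = self.active_flows.get(flow_id, 0) + 1

    def dequeue(self):
        if self.favored:
            pkt, flow_id = self.favored.popleft()
        elif self.normal:
            pkt, flow_id = self.normal.popleft()
        else:
            return None
        self.active_flows[flow_id] -= 1
        return pkt
```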
An Extended Model for TCP Loss Recovery Latency with Random Packet Losses
Evaluating the performance of the Transmission Control Protocol (TCP) has long been an important issue, and its importance continues to grow because TCP will be deployed even more widely in future wireless as well as wireline networks. This is also why there have been many efforts to analyze TCP performance more accurately. Most of these works focus on overall TCP end-to-end thr...
Improving Tor security against timing and traffic analysis attacks with fair randomization
The Tor network is probably one of the most popular online anonymity systems in the world. It is built on volunteer relays from all around the world. It has a strong scientific basis and is structured to work well in a low-latency mode, which makes it suitable for tasks such as web browsing. Despite these advantages, the low latency also makes Tor insecure against timing and tr...
Supporting Low Latency TCP-Based Media Streams
The dominance of the TCP protocol on the Internet and its success in maintaining Internet stability have led to several TCP-based stored-media streaming approaches. The success of these approaches raises the question of whether TCP can be used for low-latency streaming. Low-latency streaming allows responsive control operations for media streaming and can make interactive applications feasible. We ...